
    A fast algorithm to compute cohomology group generators of orientable 2-manifolds

    In this paper, a fast algorithm to compute cohomology group generators of a cellular decomposition of any orientable closed 2-manifold is presented. The algorithm is a dual version of the homology-generator algorithm introduced by David Eppstein [12] and developed further by Jeff Erickson and Kim Whittlesey [13].
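
    The underlying tree-cotree idea (Eppstein's construction, of which the paper computes the cohomological dual) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the input encoding (edges as vertex pairs, faces as lists of edge indices) is an assumption made for the example.

        from collections import defaultdict

        def tree_cotree_generators(edges, faces):
            """H_1 generators of a closed orientable surface via tree-cotree.

            edges: list of (u, v) vertex pairs; faces: lists of edge indices.
            Every edge outside both the primal spanning tree and the dual
            cotree closes one independent cycle; a genus-g surface has 2g.
            """
            # 1. Primal spanning tree of the 1-skeleton.
            adj = defaultdict(list)
            for i, (u, v) in enumerate(edges):
                adj[u].append((v, i))
                adj[v].append((u, i))
            parent = {0: (None, None)}          # vertex -> (parent, edge used)
            tree_edges, stack = set(), [0]
            while stack:
                u = stack.pop()
                for v, i in adj[u]:
                    if v not in parent:
                        parent[v] = (u, i)
                        tree_edges.add(i)
                        stack.append(v)

            # 2. Dual spanning cotree: connect faces through non-tree edges.
            edge_faces = defaultdict(list)
            for f, boundary in enumerate(faces):
                for i in boundary:
                    edge_faces[i].append(f)
            cotree_edges, seen, stack = set(), {0}, [0]
            while stack:
                f = stack.pop()
                for i in faces[f]:
                    if i in tree_edges:
                        continue
                    for g in edge_faces[i]:
                        if g not in seen:
                            seen.add(g)
                            cotree_edges.add(i)
                            stack.append(g)

            # 3. Each leftover edge plus the tree path between its endpoints
            #    is a generator (cohomology generators are dual to these).
            def root_path(v):
                path = set()
                while parent[v][0] is not None:
                    path.add(parent[v][1])
                    v = parent[v][0]
                return path

            leftover = set(range(len(edges))) - tree_edges - cotree_edges
            return [sorted(root_path(u) ^ root_path(v) ^ {i})
                    for i in sorted(leftover) for (u, v) in [edges[i]]]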

    Visualising the Evolution of English Covid-19 Cases with Topological Data Analysis Ball Mapper

    Understanding disease spread through data visualisation has concentrated on trends and maps. Whilst these are helpful, they neglect important multi-dimensional interactions between the characteristics of communities. Using the Topological Data Analysis Ball Mapper algorithm, we construct an abstract representation of NUTS3-level economic data and overlay onto it the confirmed cases of Covid-19 in England. In so doing we may understand how the disease spreads along different socio-economic dimensions. It is observed that some areas of the characteristic space have quickly raced to the highest levels of infection, while others close by in the characteristic space do not show large infection growth. Likewise, we see patterns emerging in very different areas that command more monitoring. A strong contribution for Topological Data Analysis, and the Ball Mapper algorithm especially, in comprehending dynamic epidemic data is signposted. (Comment: updated to include April 17, 2020.)
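
    As a sketch of the Ball Mapper construction itself (a greedy epsilon-net whose balls become vertices, joined when they overlap), assuming a NumPy point cloud stands in for the NUTS3 data:

        import numpy as np

        def ball_mapper(X, eps):
            """Vertices are greedily chosen landmark balls of radius eps;
            two vertices are joined when their balls share a point."""
            dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            landmarks, covered = [], np.zeros(len(X), dtype=bool)
            for i in range(len(X)):              # greedy eps-net
                if not covered[i]:
                    landmarks.append(i)
                    covered |= dist[i] <= eps
            balls = [set(np.flatnonzero(dist[l] <= eps)) for l in landmarks]
            edges = [(a, b)
                     for a in range(len(balls))
                     for b in range(a + 1, len(balls))
                     if balls[a] & balls[b]]
            return landmarks, balls, edges

        # Colour each vertex by an outcome, e.g. mean confirmed cases per ball:
        # colour = [cases[sorted(members)].mean() for members in balls]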

    Computing The Cubical Cohomology Ring (Extended Abstract)

    The goal of this work is to establish a new algorithm for computing the cohomology ring of cubical complexes. The cubical structure enables an explicit recurrence formula for the cup product. We derive this formula and then show how to extend the homology coreduction algorithm of Mrozek and Batko [7] to the cohomology ring structure. The implementation of the algorithm is a work in progress. This research is aimed at applications in electromagnetism and in image processing, among other fields.
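
    The cubical cup-product recurrence itself is the subject of the paper; as background, the classical simplicial (Alexander-Whitney) formula that it parallels fits in a few lines. The cochain encoding below (dicts keyed by sorted vertex tuples, coefficients in Z/2) is an illustrative assumption, not the paper's data structure.

        def cup_product(alpha, beta, p, q):
            """Return the (p+q)-cochain alpha ∪ beta over Z/2, where
            (alpha ∪ beta)(v_0..v_{p+q}) = alpha(v_0..v_p) * beta(v_p..v_{p+q})."""
            def value(simplex):                  # simplex: sorted tuple
                assert len(simplex) == p + q + 1
                front, back = simplex[:p + 1], simplex[p:]
                return (alpha.get(front, 0) * beta.get(back, 0)) % 2
            return value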

    Persistence Norms and the Datasaurus

    Topological Data Analysis (TDA) provides a toolkit for the study of the shape of high-dimensional and complex data. While operating on a space of persistence diagrams is cumbersome, persistence norms provide a simple real-valued measure of multivariate data which is seeing greater adoption within finance. A growing literature seeks links between persistence norms and the summary statistics of the data being analysed. This short note demonstrates differences in the persistence norms of the Datasaurus datasets of Matejka and Fitzmaurice. We show that persistence norms can be used as additional measures that often discriminate between datasets with the same collection of summary statistics. Treating each of the datasets as a point cloud, we construct the L1 and L2 persistence norms in dimensions 0 and 1. We show that multivariate distributions with identical covariance and correlation matrices can have considerably different persistence norms. Through the example, we remind users of persistence norms of the importance of checking the distribution of the point clouds from which the norms are constructed. (Comment: 18 pages, 5 figures, 5 tables.)
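
    A minimal sketch of the norms in question, computed here with the ripser package (one common choice; the abstract does not name the authors' toolchain): the L1 norm of a diagram is the sum of bar lifetimes, and the L2 norm is the square root of the sum of squared lifetimes.

        import numpy as np
        from ripser import ripser    # one common persistent-homology backend

        def persistence_norms(X):
            """L1/L2 persistence norms of a point cloud in dimensions 0 and 1."""
            dgms = ripser(X, maxdim=1)['dgms']
            norms = {}
            for dim, dgm in enumerate(dgms):
                life = dgm[:, 1] - dgm[:, 0]
                life = life[np.isfinite(life)]   # drop the one infinite H0 bar
                norms[(dim, 'L1')] = life.sum()
                norms[(dim, 'L2')] = np.sqrt((life ** 2).sum())
            return norms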

    Lean cohomology computation for electromagnetic modeling

    Solving eddy current problems formulated by using a magnetic scalar potential in the insulator requires a topological pre-processing step to find the so-called first cohomology basis of the insulating region, which may be very time-consuming for challenging industrially driven problems. The physics-inspired Dłotko-Specogna (DS) algorithm was shown to be superior to alternatives in performing such a topological pre-processing. Yet the DS algorithm is particularly fast when it produces as output not a regular cohomology basis but a so-called lazy one, which contains the regular basis but also keeps some additional redundant elements. A regular basis would be advantageous over the lazy one if a technique existed to produce it in about the same time as the computation of a lazy basis; such a technique has been missing from the literature. This paper fills this gap by introducing modifications to the DS algorithm that compute a regular basis of the first cohomology group in practically the same time as the generation of a lazy cohomology basis. The speedup of this modified DS algorithm with respect to the best alternative reaches more than two orders of magnitude on challenging benchmark problems. This demonstrates the potential impact of the proposed contribution in the low-frequency computational electromagnetics community and beyond. © 2017 IEEE
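
    The abstract does not spell out the modifications themselves. As a generic illustration of the lazy-to-regular step, the sketch below discards redundant generators by Gaussian elimination over Z/2, with cocycles encoded as Python integers used as bit vectors over the edge set; the actual DS machinery is considerably more refined than this.

        def regular_from_lazy(lazy_cocycles):
            """Keep a maximal independent subset of lazy cohomology generators."""
            pivots = {}                    # pivot bit -> reduced cocycle
            basis = []
            for c in lazy_cocycles:
                v = c
                while v:
                    p = v.bit_length() - 1     # current leading edge
                    if p not in pivots:
                        pivots[p] = v
                        basis.append(c)        # c is independent: keep it
                        break
                    v ^= pivots[p]             # reduce modulo kept generators
            return basis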

    Refining understanding of corporate failure through a topological data analysis mapping of Altman’s Z-score model

    Corporate failure resonates widely, leaving practitioners searching for an understanding of default risk. Managers seek to steer away from trouble, credit providers to avoid risky loans, and investors to mitigate losses. Applying Topological Data Analysis tools, this paper explores whether failing firms from the United States organise neatly along the five predictors of default proposed by the Z-score models. Firms are represented as a point cloud in a five-dimensional space, one axis for each predictor. Visualising that cloud using Ball Mapper reveals that failing firms are not often neighbours. As new modelling approaches vie to better predict firm failure, often using black boxes to deliver potentially over-fitting models, a timely reminder is sounded on the importance of evidencing the identification process. Value is added to the understanding of where in the parameter space failure occurs, and how firms might act to move away from financial distress. Further, lenders may find opportunity amongst subsets of firms that are traditionally considered to be in danger of bankruptcy but actually sit in characteristic spaces where failure has not occurred.
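
    For concreteness, the five ratios and weights of Altman's original (1968) Z-score model, which define the axes of the point cloud, are reproduced below; the abstract does not state which Z-score variant the paper uses, so this is the classical version.

        def altman_z(working_capital, retained_earnings, ebit,
                     market_equity, sales, total_assets, total_liabilities):
            """Altman (1968) Z-score; conventionally Z > 2.99 is the 'safe'
            zone and Z < 1.81 the 'distress' zone."""
            x1 = working_capital / total_assets       # liquidity
            x2 = retained_earnings / total_assets     # cumulative profitability
            x3 = ebit / total_assets                  # operating efficiency
            x4 = market_equity / total_liabilities    # leverage
            x5 = sales / total_assets                 # asset turnover
            return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5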

    Topological Microstructure Analysis Using Persistence Landscapes

    Phase separation mechanisms can produce a variety of complicated and intricate microstructures, which often can be difficult to characterize in a quantitative way. In recent years, a number of novel topological metrics for microstructures have been proposed, which measure essential connectivity information and are based on techniques from algebraic topology. Such metrics are inherently computable using computational homology, provided the microstructures are discretized using a thresholding process. However, while in many cases the thresholding is straightforward, noise and measurement errors can lead to misleading metric values. In such situations, persistence landscapes have been proposed as a natural topology metric. Common to all of these approaches is the enormous data reduction, which passes from complicated patterns to discrete information. It is therefore natural to wonder what type of information is actually retained by the topology. In the present paper, we demonstrate that averaged persistence landscapes can be used to recover central system information in the Cahn-Hilliard theory of phase separation. More precisely, we show that topological information of evolving microstructures alone suffices to accurately detect both concentration information and the actual decomposition stage of a data snapshot. Considering that persistent homology only measures discrete connectivity information, regardless of the size of the topological features, these results indicate that the system parameters in a phase separation process affect the topology considerably more than anticipated. We believe that the methods discussed in this paper could provide a valuable tool for relating experimental data to model simulations.
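
    A minimal sketch of the landscape statistic, assuming diagrams arrive as NumPy arrays of (birth, death) pairs: the k-th landscape function is the k-th largest "tent" value over the bars, and the averaged landscape is the pointwise mean across snapshots.

        import numpy as np

        def landscape(dgm, ts, k_max=3):
            """lambda_k(t) = k-th largest of max(0, min(t - b, d - t)),
            evaluated on the grid ts for a diagram dgm of (b, d) rows."""
            tents = np.maximum(0.0, np.minimum(ts[None, :] - dgm[:, [0]],
                                               dgm[:, [1]] - ts[None, :]))
            tents = np.sort(tents, axis=0)[::-1]    # largest tent first
            k = min(k_max, tents.shape[0])
            out = np.zeros((k_max, len(ts)))
            out[:k] = tents[:k]
            return out

        def averaged_landscape(diagrams, ts, k_max=3):
            """Pointwise mean of landscapes across snapshots."""
            return np.mean([landscape(d, ts, k_max) for d in diagrams], axis=0)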

    Rigorous cubical approximation and persistent homology of continuous functions

    The interaction between discrete and continuous mathematics lies at the heart of many fundamental problems in applied mathematics and computational sciences. In this paper we discuss the problem of discretizing vector-valued functions defined on finite-dimensional Euclidean spaces in such a way that the discretization error is bounded by a pre-specified small constant. While the approximation scheme has a number of potential applications, we consider its usefulness in the context of computational homology. More precisely, we demonstrate that our approximation procedure can be used to rigorously compute the persistent homology of the original continuous function on a compact domain, up to small explicitly known and verified errors. In contrast to other work in this area, our approach requires minimal smoothness assumptions on the underlying function.
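
    The pipeline being made rigorous can be sketched as follows, with GUDHI's cubical complex as an illustrative backend: sample the function on a grid, compute cubical persistence, and note that by the stability theorem a sup-norm bound on the sampling error bounds the bottleneck distance to the true diagram. The crude Lipschitz bound mentioned in the comments is a stand-in for the paper's verified enclosures, which need far weaker smoothness.

        import numpy as np
        import gudhi

        def sampled_persistence(f, lo, hi, n):
            """Persistence of the sublevel filtration of f on an n x n grid.
            If f is L-Lipschitz, the sup-norm sampling error, and hence (by
            stability) the bottleneck error of the diagram, is at most L*h
            with h the grid spacing."""
            xs = np.linspace(lo, hi, n)
            X, Y = np.meshgrid(xs, xs)
            cc = gudhi.CubicalComplex(top_dimensional_cells=f(X, Y))
            return cc.persistence()      # list of (dimension, (birth, death))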